I see three areas that this project could evolve towards:
1) Group laptop improvisations, where the samples generated in the studio are shared among all performers, either in free improvisation or with each laptop performer assigned a role (e.g. beats, melody, bass/drone, effects processing). The collective output of these performances could then easily be recorded as polished album pieces. Synchronized recording across all the laptops could be implemented over a wireless network (the resulting files would then have to be mixed down), or an instant master recording could be made through the PA mixer; see the sketch after this list.

2) Laptops with shared samples (as above) blended with live processing of acoustic instruments via microphones. In this case the recording space and the quality of the microphones would become important factors in the quality of the final product, a limitation that a purely laptop ensemble would not have.

3) The final logical extension is not only processing the live audio stream, but also capturing and manipulating fragments of it in real time during, and as part of, the performance. With software custom-designed for this purpose, both the individual sample fragments and the master output of each laptop could be saved at the end of the recording session, resulting in instant polished pieces of music with little or no post-production necessary, as well as collections of processed samples that would require little or no editing. This third option would ideally be undertaken by ensembles that play together regularly, so that a feedback loop of samples from previous sessions could inform the development of future sessions.
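To make the synchronization idea in point 1 concrete, here is a minimal sketch, assuming all the laptops share a wireless LAN, keep reasonably synchronized clocks (e.g. via NTP), and handle the actual audio capture in their own software. It is plain Python using only the standard library, and the port number, message format and function names are my own invention rather than part of any existing Streamland setup. One laptop acts as a conductor and broadcasts a shared start time; every laptop then begins its local recording at that moment, so the resulting files line up for the later mixdown.

import json
import socket
import time

PORT = 9000                # arbitrary UDP port for the session
LEAD_IN_SECONDS = 3.0      # headroom so the message reaches every laptop first


def broadcast_start(port=PORT):
    """Conductor side: announce a shared recording start time to the LAN."""
    start_at = time.time() + LEAD_IN_SECONDS
    msg = json.dumps({"cmd": "start_recording", "start_at": start_at}).encode()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(msg, ("255.255.255.255", port))
    return start_at


def wait_for_start(port=PORT):
    """Performer side: block until the announced start time, then begin recording."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind(("", port))
        data, _ = sock.recvfrom(1024)
    start_at = json.loads(data)["start_at"]
    time.sleep(max(0.0, start_at - time.time()))
    print("recording started at", time.time())   # hand off to the audio engine here

The same message format could carry stop and fade cues as well; alternatively the whole mechanism can be skipped in favor of a single master recording taken straight from the PA mixer, as noted above.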
For some time now I've had the long-term goal of creating a recording and sample-production setup that I could travel with, so that I could go anywhere in the world and record, sample and produce music, experiencing a variety of cultures, atmospheres, people, instruments, musical styles, etc. What interests me most about this idea is not just the recording interactions, but creating the music while in those places and getting a performance/sampling feedback loop happening. Although this idea is not related to Streamland as it currently exists, Streamland has laid the groundwork for at least the traveling production setup: laptop, headphones and NanoKontrol. A few compact portable mics and a compact mic stand are all that would be needed to cover the sampling aspect, and using USB mics could eliminate the need for an audio interface. Ideally, this is the direction I'd love to guide the Streamland concept towards.

I shifted from my original sample-editing plan, which was to use my software Wave Exchange to cut and process the samples, and instead continued to use Audition. Although Wave Exchange is much better for dynamic effects processing, I quickly realized that I could save most of that for the laptop improvisation phase, as simply editing the mixdowns into usable fragments was already a huge task. To give an example: a 10-minute recording yielded about 50 separate samples, equaling hours of post-production work of the stifling, non-performative variety.
I've since realized that a system could easily be devised to process recordings into separate samples on the fly in the studio, at the same time the performer is playing:

1) I custom-design software that essentially provides me with as many separate audio buffers as I need (creatable at the press of a button), each of which can be individually recorded to, with or without effects, and the effects can be manipulated by me (or potentially by the performer) in real time as the performer plays (a rough sketch of this multi-buffer idea appears below).

2) I sit in the control room and signal to the performer in the studio to start, stop, fade in, fade out, etc., in order to synchronize my recording with their separate gestures.

3) At the end of the session, each buffer can be saved individually, and each of these samples requires minimal editing.

4) Thus musical gestures are immediately captured as individual samples, and the recording session becomes a collaborative improvisation.

So later that day I modified one of my software instruments, “Strange Creatures”, to suit this purpose. With it, recording separate gestures to different buffers with input effects can be done with minimal setup, so the performative flow is not overly interrupted. Although this software won’t be used in this Uni phase of the project, I plan on using it for future collaborative Streamland studio sessions.
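As an illustration of point 1, here is a minimal sketch of the multi-buffer idea; it is not the actual “Strange Creatures” patch, and the class name, method names and the 16-bit mono assumption are all hypothetical. Incoming audio blocks are routed to whichever buffer is currently armed, a new buffer can be created at the press of a button, and every buffer is written out as its own WAV file at the end of the session. The audio callback that delivers each block of samples is assumed to come from the host audio engine, and effects processing is left out.

import wave
from array import array


class GestureRecorder:
    def __init__(self, samplerate=44100):
        self.samplerate = samplerate
        self.buffers = []        # one array('h') of 16-bit samples per gesture
        self.active = None       # index of the buffer currently being recorded to

    def new_buffer(self):
        """'Press of a button': create a fresh buffer and arm it for recording."""
        self.buffers.append(array("h"))
        self.active = len(self.buffers) - 1
        return self.active

    def stop(self):
        """Disarm recording between gestures."""
        self.active = None

    def on_audio_block(self, block):
        """Called by the (assumed) audio callback with each block of mono samples."""
        if self.active is not None:
            self.buffers[self.active].extend(block)

    def save_all(self, prefix="gesture"):
        """End of session: write each gesture out as its own WAV sample."""
        for i, buf in enumerate(self.buffers):
            with wave.open(f"{prefix}_{i:02d}.wav", "wb") as wav:
                wav.setnchannels(1)
                wav.setsampwidth(2)            # 16-bit samples
                wav.setframerate(self.samplerate)
                wav.writeframes(buf.tobytes())

In a session, I would call new_buffer() as I cue each gesture, stop() between gestures, and save_all() once the performer is done, leaving one sample file per gesture that needs little or no editing.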